Contextual emergence of intentionality
By means of an intriguing physical example, magnetic surface swimmers, which
can be described in terms of Dennett's intentional stance, I reconstruct a
hierarchy of necessary and sufficient conditions for the applicability of the
intentional strategy. It turns out that the different levels of the intentional
hierarchy are contextually emergent from their respective subjacent levels by
imposing stability constraints upon them. At the lowest level of the hierarchy,
phenomenal physical laws emerge for the coarse-grained description of open,
nonlinear, and dissipative nonequilibrium systems in critical states. One level
higher, dynamic patterns such as magnetic surface swimmers are contextually
emergent, as they are invariant under certain symmetry operations.
Again one level up, these patterns behave in an apparently rational manner by
selecting optimal pathways for the dissipation of the energy delivered by
external gradients. This is in accordance with the restated Second Law of
thermodynamics as a stability criterion. At the highest level, true believers
are intentional systems that are stable under an exchange of their observation
conditions.
Comment: 27 pages; 4 figures (Fig. 1 copyright by American Physical Society);
submitted to Journal of Consciousness Studies
A biophysical observation model for field potentials of networks of leaky integrate-and-fire neurons
We present a biophysical approach for coupling neural network activity, as
resulting from the proper dipole currents of cortical pyramidal neurons, to the
electric field in the extracellular fluid. Starting from a reduced
three-compartment model of a single pyramidal neuron, we derive an observation
model for
dendritic dipole currents in extracellular space and thereby for the dendritic
field potential that contributes to the local field potential of a neural
population. This work aligns with and substantiates the widespread dipole
assumption that is motivated by the "open-field" configuration of the dendritic
field potential around cortical pyramidal cells. Our reduced three-compartment
scheme allows us to derive networks of leaky integrate-and-fire models, which
facilitates comparison with existing neural network and observation models. In
particular,
by means of numerical simulations we compare our approach with an ad hoc model
by Mazzoni et al. [Mazzoni, A., S. Panzeri, N. K. Logothetis, and N. Brunel
(2008). Encoding of naturalistic stimuli by local field potential spectra in
networks of excitatory and inhibitory neurons. PLoS Computational Biology 4
(12), e1000239], and conclude that our biophysically motivated approach yields
a substantial improvement.
Comment: 31 pages, 4 figures
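The building block of the networks discussed above is the leaky
integrate-and-fire neuron. As a minimal illustration of that unit model (not of
the paper's three-compartment observation model), the following sketch
integrates a single LIF neuron under a constant input current; all parameter
values are hypothetical round numbers:

```python
import numpy as np

def simulate_lif(i_ext, dt=1e-4, tau=0.02, r_m=1e7,
                 v_rest=-0.07, v_thresh=-0.05, v_reset=-0.07):
    """Forward-Euler integration of a single leaky integrate-and-fire neuron.

    Membrane dynamics: tau * dV/dt = (v_rest - V) + r_m * i_ext,
    with V reset to v_reset whenever it crosses v_thresh.
    """
    v = np.full(len(i_ext), v_rest, dtype=float)
    spikes = []
    for t in range(1, len(i_ext)):
        dv = (v_rest - v[t - 1] + r_m * i_ext[t - 1]) / tau
        v[t] = v[t - 1] + dt * dv
        if v[t] >= v_thresh:
            spikes.append(t)   # record spike time (in steps)
            v[t] = v_reset     # reset after threshold crossing
    return v, spikes

# A constant suprathreshold current (3 nA for 0.5 s) yields regular spiking.
i_ext = np.full(5000, 3e-9)
v, spikes = simulate_lif(i_ext)
```

In a network setting, `i_ext` would be replaced by the summed synaptic currents
from the other units, and an observation model such as the one described in the
abstract would map the resulting dipole currents to a field potential.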
Inverse problems in neural field theory
We study inverse problems in neural field theory, i.e., the construction of synaptic weight kernels yielding a prescribed neural field dynamics. We address the issues of existence, uniqueness, and stability of solutions to the inverse problem for the Amari neural field equation as a special case, and prove that these problems are generally ill-posed. In order to construct solutions to the inverse problem, we first recast the Amari equation into a linear perceptron equation in an infinite-dimensional Banach or Hilbert space. In a second step, we construct sets of biorthogonal function systems allowing the approximation of synaptic weight kernels by a generalized Hebbian learning rule. Numerically, this construction is implemented by the Moore–Penrose pseudoinverse method. We demonstrate the instability of these solutions and use the Tikhonov regularization method for stabilization and to prevent numerical overfitting. We illustrate the stable construction of kernels by means of three instructive examples.
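The numerical machinery named in the abstract, the Moore–Penrose pseudoinverse
and its Tikhonov stabilization, can be sketched in a finite-dimensional toy
setting. The matrices below are random stand-ins for the discretized perceptron
equation, not data or kernels from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discretization of the linear "perceptron equation" W @ X = Y:
# columns of X play the role of sampled activation patterns, columns of Y
# the prescribed field values (random stand-ins for illustration only).
n, m = 50, 30                          # field nodes, training patterns
X = rng.standard_normal((n, m))
W_true = rng.standard_normal((n, n))
Y = W_true @ X + 1e-3 * rng.standard_normal((n, m))  # noisy observations

# Moore-Penrose pseudoinverse: minimum-norm least-squares kernel. Because
# m < n the problem is underdetermined, so the recovered kernel need not
# coincide with W_true; one face of the ill-posedness discussed above.
W_pinv = Y @ np.linalg.pinv(X)

# Tikhonov regularization, W = Y X^T (X X^T + lam I)^{-1}, damps the noise
# amplification along small singular values of X, at the cost of some bias.
lam = 1e-2
W_tik = Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(n))
```

The regularization parameter `lam` trades fidelity against stability: as
`lam -> 0` the Tikhonov solution approaches the pseudoinverse solution and
inherits its sensitivity to noise.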
Machine Semiotics
Despite their satisfactory speech recognition capabilities, current speech
assistive devices still lack suitable automatic semantic analysis capabilities
as well as useful representation of pragmatic world knowledge. Instead, current
technologies require users to learn keywords necessary to effectively operate
and work with a machine. Such a machine-centered approach can be frustrating
for users. However, recognizing a basic difference between the semiotics of
humans and machines offers a way to overcome this shortcoming: For
the machine, the meaning of a (human) utterance is defined by its own scope of
actions. Machines, thus, do not need to understand the meanings of individual
words, nor the meaning of phrasal and sentence semantics that combine
individual word meanings with additional implicit world knowledge. For speech
assistive devices, the learning of machine-specific meanings of human
utterances by trial and error should be sufficient. Using the trivial example
of a cognitive heating device, we show that, based on dynamic semantics, this
process can be formalized as the learning of utterance-meaning pairs (UMPs).
This is followed by a detailed semiotic contextualization of the previously
generated signs.
Comment: 37 pages, 4 tables
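The trial-and-error learning of utterance-meaning pairs can be sketched as a
simple reinforcement scheme. The class name, action set, and scoring rule below
are hypothetical illustrations, not the paper's dynamic-semantics formalism:

```python
import random

class CognitiveHeater:
    """Toy trial-and-error learner of utterance-meaning pairs (UMPs).

    The machine's 'meanings' are its own actions; per-utterance action
    scores are reinforced by user feedback. Illustrative sketch only.
    """

    ACTIONS = ("heat_up", "cool_down", "hold")

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.umps = {}  # utterance -> {action: score}

    def interpret(self, utterance):
        scores = self.umps.setdefault(utterance,
                                      {a: 0 for a in self.ACTIONS})
        best = max(scores.values())
        ties = [a for a, s in scores.items() if s == best]
        return self.rng.choice(ties)  # explore among tied actions

    def feedback(self, utterance, action, accepted):
        # Reinforce accepted interpretations, penalize rejected ones.
        self.umps[utterance][action] += 1 if accepted else -1

heater = CognitiveHeater()
for _ in range(20):
    act = heater.interpret("I'm freezing")
    # The user accepts only heating as the meaning of this utterance.
    heater.feedback("I'm freezing", act, accepted=(act == "heat_up"))
```

After a few rejected trials the pair ("I'm freezing", heat_up) dominates the
score table, so the machine has learned the utterance's machine-specific
meaning without any lexical or sentence-level semantics.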